347 research outputs found
Sensitivity analysis of hybrid systems with state jumps with application to trajectory tracking
This paper addresses the sensitivity analysis for hybrid systems with
discontinuous (jumping) state trajectories. We consider state-triggered jumps
in the state evolution, potentially accompanied by mode switching in the
control vector field as well. For a given trajectory with state jumps, we show
how to construct an approximation of a nearby perturbed trajectory
corresponding to a small variation of the initial condition and input. A major
complication in the construction of such an approximation is that, in general,
the jump times corresponding to a nearby perturbed trajectory are not equal to
those of the nominal one. The main contribution of this work is the development
of a notion of error to clarify in which sense the approximate trajectory is,
at each instant of time, a first-order approximation of the perturbed
trajectory. This notion of error naturally finds application in the (local)
tracking problem of a time-varying reference trajectory of a hybrid system. To
illustrate the possible use of this new error definition in the context of
trajectory tracking, we outline how the standard linear trajectory tracking
control for nonlinear systems, based on linear quadratic regulator (LQR) theory
to compute the optimal feedback gain, could be generalized to hybrid systems.
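As a minimal sketch of the LQR ingredient mentioned above (our own toy scalar example, not the paper's hybrid construction), the optimal tracking feedback gain can be obtained by iterating the discrete-time Riccati equation for the linearized error dynamics:

```python
# Scalar discrete-time LQR via Riccati iteration: a toy sketch of how the
# tracking feedback gain in u = -K*e is computed for linearized error
# dynamics e[k+1] = a*e[k] + b*u[k]. Numbers are illustrative only.

def lqr_gain(a, b, q, r, iters=500):
    """Iterate the discrete algebraic Riccati equation to convergence."""
    p = q
    for _ in range(iters):
        k = (b * p * a) / (r + b * p * b)   # optimal gain at this iterate
        p = q + a * p * a - a * p * b * k   # Riccati update
    return k

K = lqr_gain(a=1.2, b=1.0, q=1.0, r=0.1)
# The closed loop a - b*K must be stable (|a - b*K| < 1) for the tracking
# error to converge; hybrid extensions must additionally cope with the
# jump-time mismatch between nominal and perturbed trajectories.
print(K, abs(1.2 - K) < 1.0)
```

The hybrid generalization outlined in the paper is precisely about making such a gain meaningful when nominal and perturbed trajectories jump at different times.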
Backstepping controller synthesis and characterizations of incremental stability
Incremental stability is a property of dynamical and control systems,
requiring the uniform asymptotic stability of every trajectory, rather than
that of an equilibrium point or a particular time-varying trajectory. Similarly
to stability, Lyapunov functions and contraction metrics play important roles
in the study of incremental stability. In this paper, we provide
characterizations and descriptions of incremental stability in terms of
existence of coordinate-invariant notions of incremental Lyapunov functions and
contraction metrics, respectively. Most design techniques providing controllers
rendering control systems incrementally stable have two main drawbacks: they
can only be applied to control systems in either parametric-strict-feedback or
strict-feedback form, and they require these control systems to be smooth. In
this paper, we propose a design technique that is applicable to larger classes
of (not necessarily smooth) control systems. Moreover, we propose a recursive
way of constructing contraction metrics (for smooth control systems) and
incremental Lyapunov functions, which have been identified as a key tool
enabling the construction of finite abstractions of nonlinear control systems,
the approximation of stochastic hybrid systems, source-code model checking for
nonlinear dynamical systems and so on. The effectiveness of the proposed
results in this paper is illustrated by synthesizing a controller rendering a
non-smooth control system incrementally stable as well as constructing its
finite abstraction, using the computed incremental Lyapunov function.
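A minimal numerical illustration of incremental stability (our own toy contracting system, not the paper's backstepping design): for x' = -x + u(t), any two trajectories driven by the same input converge to each other, irrespective of any equilibrium or reference trajectory.

```python
import math

# Toy sketch of incremental stability: simulate x' = -x + u(t) with
# u(t) = sin(t) from two different initial conditions and check that the
# trajectories converge to each other (the difference obeys e' = -e).

def simulate(x0, dt=1e-3, T=10.0):
    x, t = x0, 0.0
    while t < T:
        x += dt * (-x + math.sin(t))   # forward-Euler step of x' = -x + u(t)
        t += dt
    return x

x1, x2 = simulate(5.0), simulate(-3.0)
gap0, gapT = abs(5.0 - (-3.0)), abs(x1 - x2)
# The gap contracts roughly like exp(-T): here 8 * e^{-10} is about 3.6e-4.
print(gapT < 1e-3)
```

An incremental Lyapunov function certifying this behaviour is simply V(x, y) = (x - y)^2, which decays along any pair of trajectories with a common input.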
Finite Horizon Privacy of Stochastic Dynamical Systems: A Synthesis Framework for Dependent Gaussian Mechanisms
We address the problem of synthesizing distorting mechanisms that maximize privacy of stochastic dynamical systems. Information about the system state is obtained through sensor measurements. This data is transmitted to a remote station through an unsecured/public communication network. We aim to keep part of the system state private (a private output); however, because the network is unsecured, adversaries might access sensor data and input signals, which can be used to estimate private outputs. To prevent an accurate estimation, we pass sensor data and input signals through a distorting (privacy-preserving) mechanism before transmission, and send the distorted data to the trusted user. These mechanisms consist of a coordinate transformation and additive dependent Gaussian vectors. We formulate the synthesis of the distorting mechanisms as a convex program, where we minimize the mutual information (our privacy metric) between an arbitrarily large sequence of private outputs and the disclosed distorted data for desired distortion levels -- how different actual and distorted data are allowed to be.
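In the scalar case the mutual-information privacy metric used above has a closed form (a toy illustration of the tradeoff, not the paper's convex program): for a disclosed signal z = y + n with private output y ~ N(0, s_y2) and independent Gaussian distortion n ~ N(0, s_n2), larger allowed distortion buys strictly more privacy.

```python
import math

# Toy scalar Gaussian mechanism: the mutual information between the
# private output y and the disclosed signal z = y + n is
#   I(y; z) = 0.5 * ln(1 + s_y2 / s_n2)   [nats],
# so increasing the distortion variance s_n2 monotonically reduces leakage.

def mutual_info(s_y2, s_n2):
    return 0.5 * math.log(1.0 + s_y2 / s_n2)

low_noise  = mutual_info(1.0, 0.1)    # small allowed distortion, high leakage
high_noise = mutual_info(1.0, 10.0)   # large allowed distortion, low leakage
print(low_noise > high_noise > 0.0)
```

The vector, finite-horizon version of this quantity is what the synthesis framework minimizes subject to the distortion constraint.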
Gaussian Mechanisms Against Statistical Inference: Synthesis Tools
In this manuscript, we provide a set of tools (in terms of semidefinite programs) to synthesize Gaussian mechanisms to maximize privacy of databases. Information about the database is disclosed through queries requested by (potentially) adversarial users. We aim to keep part of the database private (private sensitive information); however, disclosed data could be used to estimate private information. To avoid an accurate estimation by the adversaries, we pass the requested data through distorting (privacy-preserving) mechanisms before transmission and send the distorted data to the user. These mechanisms consist of a coordinate transformation and an additive dependent Gaussian vector. We formulate the synthesis of distorting mechanisms in terms of semidefinite programs in which we seek to minimize the mutual information (our privacy metric) between private data and the disclosed distorted data given a desired distortion level -- how different actual and distorted data are allowed to be.
Privacy-Preserving Federated Learning via System Immersion and Random Matrix Encryption
Federated learning (FL) has emerged as a privacy solution for collaborative
distributed learning where clients train AI models directly on their devices
instead of sharing their data with a centralized (potentially adversarial)
server. Although FL preserves local data privacy to some extent, it has been
shown that information about clients' data can still be inferred from model
updates. In recent years, various privacy-preserving schemes have been
developed to address this privacy leakage. However, they often provide privacy
at the expense of model performance or system efficiency, and balancing these
tradeoffs is a crucial challenge when implementing FL schemes. In this
manuscript, we propose a Privacy-Preserving Federated Learning (PPFL) framework
built on the synergy of matrix encryption and system immersion tools from
control theory. The idea is to immerse the learning algorithm, a Stochastic
Gradient Descent (SGD), into a higher-dimensional system (the so-called target
system) and design the dynamics of the target system so that: the trajectories
of the original SGD are immersed/embedded in its trajectories, and it learns on
encrypted data (here we use random matrix encryption). Matrix encryption is
reformulated at the server as a random change of coordinates that maps original
parameters to a higher-dimensional parameter space and enforces that the target
SGD converges to an encrypted version of the original SGD optimal solution. The
server decrypts the aggregated model using the left inverse of the immersion
map. We show that our algorithm provides the same level of accuracy and
convergence rate as the standard FL with a negligible computation cost while
revealing no information about the clients' data.
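The key algebraic fact behind this kind of scheme can be shown in a toy example (illustrative only; the paper's immersion map is higher-dimensional and its target dynamics are designed, not picked ad hoc): averaging commutes with an invertible linear encoding, so the server can aggregate encrypted client updates and decrypt the aggregate with the inverse of the map.

```python
# Toy matrix-encryption sketch: two clients encode parameter updates with
# an invertible matrix E; the server averages the ciphertexts and applies
# the (left) inverse to recover the plaintext average exactly.

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

E    = [[2.0, 1.0], [1.0, 1.0]]     # invertible "encryption" matrix (det = 1)
Einv = [[1.0, -1.0], [-1.0, 2.0]]   # its inverse, used for decryption

w1, w2 = [1.0, 3.0], [5.0, -1.0]    # two clients' plaintext updates
enc_avg = [(a + b) / 2 for a, b in zip(matvec(E, w1), matvec(E, w2))]
decrypted = matvec(Einv, enc_avg)   # server decrypts the aggregate

plain_avg = [(a + b) / 2 for a, b in zip(w1, w2)]
print(decrypted == plain_avg)       # linearity makes aggregation commute
```

The immersion machinery extends this idea from a single averaging step to the full SGD dynamics, so the target system converges to an encrypted copy of the original optimum.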
Feedback Motion Prediction for Safe Unicycle Robot Navigation
As a simple and robust mobile robot base, differential drive robots that can
be modelled as a kinematic unicycle find significant applications in logistics
and service robotics in both industrial and domestic settings. Safe robot
navigation around obstacles is an essential skill for such unicycle robots to
perform diverse useful tasks in complex cluttered environments, especially
around people and other robots. Fast and accurate safety assessment plays a key
role in reactive and safe robot motion design. In this paper, as a more
accurate and still simple alternative to the standard circular Lyapunov level
sets, we introduce novel conic feedback motion prediction methods for bounding
the closed-loop motion trajectory of the kinematic unicycle robot model under a
standard unicycle motion control approach. We present an application of
unicycle feedback motion prediction for safe robot navigation around obstacles
using reference governors, where the safety of a unicycle robot is continuously
monitored based on the predicted future robot motion. We investigate the role
of motion prediction on robot behaviour in numerical simulations and conclude
that fast and accurate feedback motion prediction is key for fast, reactive,
and safe robot navigation around obstacles.
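A numerical sketch of the circular baseline the paper improves upon (our own toy gains and poses, not the paper's conic construction): under a standard unicycle controller the goal distance never increases, so the closed-loop trajectory stays inside the circular Lyapunov level set of radius equal to the initial goal distance.

```python
import math

# Toy unicycle simulation: standard control law v = kv*d*cos(phi),
# w = kw*phi, where d is the goal distance and phi the heading error.
# The squared distance update satisfies d+^2 <= d^2 for kv*dt < 2, so the
# robot is trapped in the initial distance circle (the baseline bound).

def wrap(a):
    return math.atan2(math.sin(a), math.cos(a))

x, y, th = 0.0, 0.0, 2.0        # start pose, heading away from the goal
gx, gy   = 4.0, 3.0             # goal position
kv, kw, dt = 1.0, 4.0, 1e-3
d0 = math.hypot(gx - x, gy - y)

monotone = True
for _ in range(20000):
    d   = math.hypot(gx - x, gy - y)
    phi = wrap(math.atan2(gy - y, gx - x) - th)   # heading error to goal
    v, w = kv * d * math.cos(phi), kw * phi       # standard unicycle control
    x  += dt * v * math.cos(th)
    y  += dt * v * math.sin(th)
    th  = wrap(th + dt * w)
    if math.hypot(gx - x, gy - y) > d + 1e-9:
        monotone = False

d_final = math.hypot(gx - x, gy - y)
print(monotone, d_final < d0)
```

The conic prediction sets proposed in the paper tighten this circle by also exploiting the heading, which is what makes the safety assessment less conservative.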
Immersion and Invariance-based Coding for Privacy in Remote Anomaly Detection
We present a framework for the design of coding mechanisms that allow
remotely operating anomaly detectors in a privacy-preserving manner. We
consider the following problem setup. A remote station seeks to identify
anomalies based on system input-output signals transmitted over communication
networks. However, it is not desired to disclose true data of the system
operation as it can be used to infer private information. To prevent
adversaries from eavesdropping on the network or at the remote station itself
to access private data, we propose a privacy-preserving coding scheme to
distort signals before transmission. As a next step, we design a new anomaly
detector that runs on distorted signals and produces distorted diagnostics
signals, and a decoding scheme that allows extracting true diagnostics data
from distorted signals without error. The proposed scheme is built on the
synergy of matrix encryption and system Immersion and Invariance (I&I) tools
from control theory. The idea is to immerse the anomaly detector into a
higher-dimensional system (the so-called target system). The dynamics of the
target system are designed such that: the trajectories of the original anomaly
detector are immersed/embedded in its trajectories, it works on randomly
encoded input-output signals, and produces an encoded version of the original
anomaly detector alarm signals, which are decoded to extract the original alarm
at the user side. We show that the proposed privacy-preserving scheme provides
the same anomaly detection performance as standard Kalman filter-based
chi-squared anomaly detectors while revealing no information about system data.
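A toy scalar example of why such a scheme can preserve detection exactly (our own simplification, not the paper's I&I construction): the chi-squared statistic r^2 / sigma^2 is invariant under an invertible scaling r -> c*r with sigma^2 -> c^2 * sigma^2, so a detector running on encoded residuals raises exactly the original alarms.

```python
import random

# Toy encoded chi-squared detector: scaling residuals and their variance
# by the same invertible factor leaves the test statistic, and hence every
# alarm decision, unchanged.

random.seed(0)
sigma2, c, thresh = 1.0, 7.3, 3.84   # 3.84: 95% chi-squared(1) threshold

residuals = [random.gauss(0, 1) for _ in range(50)] + [5.0]  # last: a fault
alarms_plain   = [r * r / sigma2 > thresh for r in residuals]
alarms_encoded = [(c * r) ** 2 / (c * c * sigma2) > thresh for r in residuals]
print(alarms_plain == alarms_encoded, alarms_plain[-1])
```

The I&I-based coding in the paper plays the role of this scaling for the full Kalman-filter-based detector, producing encoded alarm signals that decode to the originals without error.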
Privacy-Preserving Anomaly Detection in Stochastic Dynamical Systems: Synthesis of Optimal Gaussian Mechanisms
We present a framework for the design of distorting mechanisms that allow
remotely operating anomaly detectors in a privacy-preserving fashion. We
consider the problem setting in which a remote station seeks to identify
anomalies using system input-output signals transmitted over communication
networks. However, in such a networked setting, it is not desired to disclose
true data of the system operation as it can be used to infer private
information -- modeled here as a system private output. To prevent accurate
estimation of private outputs by adversaries, we pass original signals through
distorting (privacy-preserving) mechanisms and send the distorted data to the
remote station (which inevitably leads to degraded monitoring performance). The
design of these mechanisms is formulated as a privacy-utility (tradeoff)
problem where system utility is characterized by anomaly detection performance,
and privacy is quantified using information-theoretic metrics (mutual
information and differential entropy). We cast the synthesis of dependent
Gaussian mechanisms as the solution of a convex program (log-determinant cost
with linear matrix inequality constraints) where we seek to maximize privacy
over a finite window of realizations while guaranteeing a bound on monitoring
performance degradation. We provide simulation results to illustrate the
performance of the developed tools.
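The privacy-utility tension described above can be made concrete in a scalar sketch (our own toy numbers, not the paper's log-determinant program): adding distortion of variance s_n2 to the residual lowers the mutual-information leakage but also lowers the probability of detecting a fault of size mu.

```python
import math

# Toy privacy-utility sweep: for each distortion level, compute the
# information leakage I(y; y+n) and the probability that a scalar
# chi-squared detector (threshold 3.84) flags a fault of mean mu when the
# residual variance is inflated by the added noise.

def leakage(s_y2, s_n2):                  # I(y; y+n) in nats
    return 0.5 * math.log(1.0 + s_y2 / s_n2)

def detection_prob(mu, s2, thresh=3.84):  # P(r^2/s2 > thresh | fault mu)
    lim = math.sqrt(thresh * s2)          # alarm iff |r| exceeds this
    q = lambda z: 0.5 * math.erfc(z / math.sqrt(2.0))  # Gaussian upper tail
    return q((lim - mu) / math.sqrt(s2)) + q((lim + mu) / math.sqrt(s2))

rows = [(s_n2, leakage(1.0, s_n2), detection_prob(5.0, 1.0 + s_n2))
        for s_n2 in (0.1, 1.0, 10.0)]
# Both columns fall as distortion grows: more privacy, less utility.
print(all(rows[i][1] > rows[i + 1][1] and rows[i][2] > rows[i + 1][2]
          for i in range(2)))
```

The synthesis problem in the paper navigates exactly this Pareto front, maximizing privacy over a finite window subject to a bound on detection degradation.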